We established that Time Complexity is the most critical dimension governing algorithmic efficiency, often forcing a trade-off with space usage. To compare algorithms accurately, we rely on Big O notation ($O(...)$) to quantify how running time scales with input size.

  • Big O notation describes an asymptotic upper bound on an algorithm's running time as the input size ($n$) approaches infinity; in practice it is most often quoted for the worst case. This abstraction focuses only on the rate of scaling.
  • When analyzing sorting algorithms, we measure the number of fundamental operations (like comparisons and swaps) required to transform input $A$ into sorted output $B$, as the instrumented sketch after this list demonstrates.
  • We ignore constants and lower-order terms because, as $n$ grows large, the highest-order term dominates the execution time.
  • An $O(n^2)$ algorithm will always end up slower than an $O(n \log n)$ one, regardless of implementation constants, once $n$ is sufficiently large; the performance gap widens by a factor of $n / \log n$.
  • Our primary goal is to find algorithms that meet the theoretical lower bound for comparison-based sorting, $\Omega(n \log n)$ comparisons, i.e., that run in $O(n \log n)$ time.
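To make the operation-counting idea concrete, here is a minimal Python sketch; `bubble_sort_counting` is a hypothetical helper written for this illustration, not a function from the text. It tallies comparisons and swaps for random inputs: doubling $n$ roughly quadruples the comparison count, the fingerprint of $O(n^2)$ growth.

```python
import random

def bubble_sort_counting(a):
    """Bubble sort instrumented to tally comparisons and swaps.
    (Hypothetical helper written for this illustration.)"""
    a = list(a)                      # sort a copy; the input A stays untouched
    comparisons = swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1         # one fundamental comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1           # one fundamental swap
    return a, comparisons, swaps

for n in (100, 200, 400):
    data = [random.random() for _ in range(n)]
    _, comps, swaps = bubble_sort_counting(data)
    # Comparisons equal n(n-1)/2 exactly, so doubling n quadruples the count.
    print(f"n={n:3d}  comparisons={comps:6d}  n(n-1)/2={n*(n-1)//2:6d}  swaps={swaps}")
```

Note that the comparison count is fixed by $n$ alone, while the swap count depends on how disordered the input is; both are bounded by $n(n-1)/2$.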
| Big O Notation | Name | Scaling and Sorting Context | Example |
|---|---|---|---|
| $O(1)$ | Constant | Independent of input size $n$. | Accessing element $A[i]$ |
| $O(n)$ | Linear | Complexity grows proportional to $n$. | Finding the minimum value in $A$ |
| $O(n \log n)$ | Log-Linear | Optimal for comparison sorts; efficient scaling. | Merge Sort, Quick Sort (avg.) |
| $O(n^2)$ | Quadratic | Complexity grows very quickly; standard for simple sorts. | Bubble Sort, Insertion Sort |
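To check the table's scaling claims numerically, here is a short sketch comparing the dominant terms directly. The $1/100$ factor on the quadratic term is an assumption chosen to favor the $O(n^2)$ side; even with that head start, the log-linear term wins once $n$ passes roughly $1{,}000$, illustrating why constants never change the asymptotic verdict.

```python
import math

# Constants are deliberately skewed in the quadratic algorithm's favor
# (a hypothetical 1/100 cost per operation): they only move the crossover
# point, never which side wins in the end.
for n in (10, 100, 1_000, 10_000, 100_000):
    log_linear = n * math.log2(n)    # O(n log n) operation count
    quadratic = n * n / 100          # O(n^2) count with a 100x head start
    winner = "n log n" if log_linear < quadratic else "n^2/100"
    print(f"n={n:>7,}  n*log2(n)={log_linear:>13,.0f}  n^2/100={quadratic:>15,.0f}  smaller: {winner}")
```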